ESTIMATION OF HIGH-DIMENSIONAL LOW-RANK MATRICES By Angelika Rohde and Alexandre Tsybakov
Authors
Abstract
Suppose that we observe entries or, more generally, linear combinations of entries of an unknown m × T matrix A corrupted by noise. We are particularly interested in the high-dimensional setting where the number mT of unknown entries can be much larger than the sample size N. Motivated by several applications, we consider estimation of matrix A under the assumption that it has small rank. This can be viewed as a dimension reduction or sparsity assumption. In order to shrink toward a low-rank representation, we investigate penalized least squares estimators with a Schatten-p quasi-norm penalty term, p ≤ 1. We study these estimators under two possible assumptions — a modified version of the restricted isometry condition and a uniform bound on the ratio "empirical norm induced by the sampling operator/Frobenius norm." The main results are stated as nonasymptotic upper bounds on the prediction risk and on the Schatten-q risk of the estimators, where q ∈ [p, 2]. The rates that we obtain for the prediction risk are of the form rm/N (for m = T), up to logarithmic factors, where r is the rank of A. The particular examples of multitask learning and matrix completion are worked out in detail. The proofs are based on tools from the theory of empirical processes. As a by-product, we derive bounds for the kth entropy numbers of the quasi-convex Schatten class embeddings S_p ↪ S_2, p < 1, which are of independent interest.
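For the convex boundary case p = 1 of the Schatten penalty (the nuclear norm), the penalized least squares step admits a closed-form solution via singular value soft-thresholding. The following numpy sketch illustrates only this shrinkage operation; the function name `svt` and the toy data are illustrative, not the paper's estimator:

```python
import numpy as np

def svt(Y, lam):
    """Singular value soft-thresholding: the proximal operator of the
    nuclear (Schatten-1) norm penalty lam * sum_j sigma_j(A).
    Every singular value of Y is shrunk toward zero, which produces
    a low-rank estimate when lam exceeds the small singular values."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

# Toy usage: a rank-2 signal observed under Gaussian noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
Y = A + 0.1 * rng.standard_normal((50, 40))
A_hat = svt(Y, lam=2.0)  # noise-level singular values are thresholded away
```

For p < 1 the Schatten quasi-norm penalty is nonconvex and no such closed form exists, which is one reason the paper's analysis is more delicate in that regime.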
Similar papers
Sparse structures: statistical theory and practice, Bristol, June 2010
Alexandre Tsybakov (Paris VI, France), Estimation of high-dimensional low rank matrices. Suppose that we observe entries or, more generally, linear combinations of entries of an unknown m × T matrix A corrupted by noise. We are particularly interested in the high-dimensional setting where the number mT of unknown entries can be much larger than the sample size N. Motivated by several application...
Full text
Estimation of (near) low-rank matrices with noise and high-dimensional scaling
We study an instance of high-dimensional inference in which the goal is to estimate a matrix Θ ∈ R^{k×p} on the basis of N noisy observations. The unknown matrix Θ is assumed to be either exactly low rank, or "near" low-rank, meaning that it can be well-approximated by a matrix with low rank. We consider a standard M-estimator based on regularization by the nuclear or trace norm over matrices, and ...
Full text
Using Low-Rank Ensemble Kalman Filters for Data Assimilation with High Dimensional Imperfect Models
Low-rank square-root Kalman filters were developed for the efficient estimation of the state of high dimensional dynamical systems. These filters avoid the huge computational burden of the Kalman filter by approximating the filter’s error covariance matrices by low-rank matrices. Accounting for model errors with these filters would cancel the benefits of the low-rank approximation as the insert...
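The low-rank device these filters rely on can be illustrated by keeping only a square-root factor of the ensemble sample covariance, so the full n × n matrix is never formed. A minimal sketch, assuming a standard ensemble-anomaly factorization (the function name is hypothetical):

```python
import numpy as np

def ensemble_sqrt_factor(ensemble):
    """Given an n x N ensemble of states (columns are members), return
    the n x N factor S whose sample covariance is P = S @ S.T, with
    rank at most N - 1. Square-root filters update S directly instead
    of the full n x n covariance matrix P."""
    n, N = ensemble.shape
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    return anomalies / np.sqrt(N - 1)

# Usage: a 1000-dimensional state represented by 20 ensemble members.
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 20))
S = ensemble_sqrt_factor(X)  # 1000 x 20 factor instead of 1000 x 1000 matrix
```

The memory and cost savings come from N being far smaller than the state dimension n, which is exactly the regime the abstract describes.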
Full text
ROP: Matrix Recovery via Rank-One Projections
Estimation of low-rank matrices is of significant interest in a range of contemporary applications. In this paper, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small low-rank perturbations. Both u...
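The rank-one projection sampling model referred to above takes each observation to be a bilinear form b_iᵀ A c_i, i.e. an inner product of A with the rank-one matrix b_i c_iᵀ. A hedged sketch of the measurement model only (the nuclear norm recovery step is omitted, and the function name is illustrative):

```python
import numpy as np

def rop_sample(A, n_meas, rng):
    """Rank-one projection measurements: y_i = b_i^T A c_i
    = <A, b_i c_i^T>, so each measurement matrix X_i = b_i c_i^T
    has rank one. Vectors b_i, c_i are independent standard Gaussians."""
    m, T = A.shape
    B = rng.standard_normal((n_meas, m))
    C = rng.standard_normal((n_meas, T))
    y = np.einsum('im,mt,it->i', B, A, C)  # y[i] = B[i] @ A @ C[i]
    return y, B, C

# Usage: sample 25 rank-one projections of a rank-one 4 x 3 matrix.
rng = np.random.default_rng(2)
A = np.outer(np.arange(4.0), np.ones(3))
y, B, C = rop_sample(A, n_meas=25, rng=rng)
```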
Full text
Confidence Sets for the Optimal Approximating Model -- Bridging a Gap between Adaptive Point Estimation and Confidence Regions
Abstract: In the setting of high-dimensional linear models with Gaussian noise, we investigate the possibility of confidence statements connected to model selection. Although there exist numerous procedures for adaptive point estimation, the construction of adaptive confidence regions is severely limited (cf. Li, 1989). The present paper sheds new light on this gap. We develop exact and adaptiv...
Full text